LLM Finetune with PEFT #99
Conversation
@coderabbitai review
Initial quick read. Looks good. Will read more thoroughly on Monday. Nice work on this!
Co-authored-by: Alex Strick van Linschoten <[email protected]>
Not much else to say beyond these comments (and the ones from before). Looks good overall.
Just a few tiny nits, lgtm otherwise
@htahir1 are we OK to merge this? It will replace the current LoRA example with the PEFT one, and LitGPT will be moved to a new directory. Any blockers?
Just one more thing that came to my mind: we should probably also rename the …
Update e2e files for new project (#99)
This PR brings in a pipeline to fine-tune a Mistral model using the PEFT library on the ViGGO dataset.
Key highlights:
- The existing LitGPT project moves to llm-litgpt-finetuning, and this new project becomes llm-lora-finetuning.
P.S. The diff is a bit misleading due to the project move. The LitGPT project was a straight lift-and-shift with no changes to it.
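As a rough illustration of why the PEFT/LoRA approach this PR adopts is attractive (a sketch for context, not code from this PR): LoRA freezes a weight matrix W (d x k) and trains two small rank-r matrices instead, which shrinks the trainable-parameter count dramatically. The helper name and dimensions below are illustrative assumptions.

```python
# Sketch of the LoRA parameter-count arithmetic. Instead of updating a full
# weight matrix W (d x k), LoRA trains A (r x k) and B (d x r) so the
# effective weight is W + B @ A. Dimensions here are hypothetical examples.

def lora_param_counts(d: int, k: int, r: int) -> tuple[int, int]:
    """Return (full_finetune_params, lora_params) for one d x k weight."""
    full = d * k          # every entry of W is trainable
    lora = r * k + d * r  # only A (r x k) and B (d x r) are trainable
    return full, lora

# A 4096 x 4096 projection with rank-8 adapters:
full, lora = lora_param_counts(d=4096, k=4096, r=8)
fraction = lora / full  # fraction of weights that actually train
```

With r=8 on a 4096x4096 matrix, only about 0.4% of the parameters are trainable, which is what makes single-GPU fine-tuning of a 7B model like Mistral feasible.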
Template update: zenml-io/template-llm-finetuning#4